
    Inverse Kinematics with Forward Dynamics Solvers for Sampled Motion Tracking

    Tracking Cartesian trajectories with end-effectors is a fundamental task in robot control. For motion that is not known a priori, solvers must find fast solutions to the inverse kinematics (IK) problem for discretely sampled target poses. At the joint control level, however, the robot's actuators operate in a continuous domain, requiring smooth transitions between individual states. In this work we present a boost to the well-known Jacobian transpose method to address this goal, using the mass matrix of a virtually conditioned twin of the manipulator. Results on the UR10 show that our dynamics-based solver achieves superior convergence and solution quality compared to the plain Jacobian method. Our algorithm is straightforward to implement as a controller using existing robotics libraries.

    Comment: 6 pages, 8 figures
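
    The core loop can be pictured in a few lines. This is a minimal illustrative sketch, not the authors' implementation: it assumes callables fk(q), jac(q), and mass(q) returning the virtual twin's end-effector position, Jacobian, and mass matrix, with placeholder gains and step size.

        import numpy as np

        def fd_ik_step(q, qd, x_target, fk, jac, mass, dt=1e-3, kp=100.0, kd=2.0):
            # A virtual spring pulls the end-effector toward the sampled target.
            f = kp * (x_target - fk(q))
            # The Jacobian transpose maps the task-space force to joint torques;
            # joint damping keeps the virtual twin's motion smooth.
            tau = jac(q).T @ f - kd * qd
            # Forward dynamics of the virtually conditioned twin: its mass
            # matrix shapes how torques become joint accelerations.
            qdd = np.linalg.solve(mass(q), tau)
            qd = qd + qdd * dt
            return q + qd * dt, qd

    Iterating this step until the task-space error is small yields a joint-space solution that is smooth by construction, since every state follows from integrating the virtual dynamics.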

    Haptic Rendering of Arbitrary Serial Manipulators for Robot Programming


    Neuromorphic stereo vision: A survey of bio-inspired sensors and algorithms

    Any visual sensor, whether artificial or biological, maps the 3D world onto a 2D representation. The missing dimension is depth, and most species use stereo vision to recover it. Stereo vision relies on multiple perspectives and matching; it obtains depth from a pair of images. Stereo algorithms are also used successfully in robotics. Although biological systems seem to compute disparities effortlessly, artificial methods suffer from high energy demands and latency. The crucial part is the correspondence problem: finding the matching points of two images. The development of event-based cameras, inspired by the retina, enables the exploitation of an additional physical constraint: time. Due to their asynchronous mode of operation, which accounts for the precise timing of spikes, spiking neural networks can take advantage of this constraint. In this work, we investigate sensors and algorithms for event-based stereo vision that lead to more biologically plausible robots. We focus mainly on binocular stereo vision.
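
    To make the role of the time constraint concrete, here is a toy temporal-coincidence matcher. It is a sketch only, assuming rectified cameras and events given as (t, x, y, polarity) tuples; the thresholds are placeholders.

        def match_events(left, right, dt_max=1e-3, row_tol=1):
            # For each left event, search right events with the same polarity
            # on (nearly) the same row, and keep the temporally closest one.
            disparities = []
            for tl, xl, yl, pl in left:
                candidates = [(abs(tl - tr), xl - xr)
                              for tr, xr, yr, pr in right
                              if pr == pl and abs(yr - yl) <= row_tol
                              and abs(tl - tr) <= dt_max]
                if candidates:
                    disparities.append(min(candidates)[1])  # disparity of best match
            return disparities

    Real event-based matchers replace this brute-force search with cooperative networks or spiking neurons, but the principle is the same: near-simultaneous events are likely to stem from the same 3D point.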

    A Framework for Coupled Simulations of Robots and Spiking Neuronal Networks

    Bio-inspired robots still rely on classic robot control, although advances in neurophysiology would allow neural approaches to control as well. However, connecting a robot to spiking neuronal networks needs adjustments for each purpose and requires frequent adaptation during iterative development. Existing approaches either cannot bridge the gap between robotics and neuroscience or do not account for frequent adaptations. The contribution of this paper is an architecture and domain-specific language (DSL) for connecting robots to spiking neuronal networks for iterative testing in simulations, allowing neuroscientists to abstract from implementation details. The framework is implemented in a web-based platform. We validate the applicability of our approach with a case study based on image processing for controlling a four-wheeled robot in an experimental setting inspired by Braitenberg vehicles.
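
    The flavor of such a DSL can be sketched with a hypothetical decorator-based API (this is not the paper's actual syntax; all names are invented for illustration): the user declares which sensor feeds which neuron population, and the framework handles topics, timing, and the SNN simulator.

        connections = []

        def robot_to_neurons(topic, population):
            # Declarative mapping: registering the function is enough; the
            # framework wires up the subscription and injects the result
            # into the named population every control step.
            def register(fn):
                connections.append((topic, population, fn))
                return fn
            return register

        @robot_to_neurons("/camera/image_raw", "sensor_neurons")
        def image_to_rates(image):
            # Braitenberg-style coupling: mean brightness of each image half
            # drives the corresponding sensory neurons.
            h, w = image.shape[:2]
            return {"left": float(image[:, :w // 2].mean()),
                    "right": float(image[:, w // 2:].mean())}

    The point of the abstraction is that the body of image_to_rates is all a neuroscientist has to change between experiment iterations.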

    Connecting Artificial Brains to Robots in a Comprehensive Simulation Framework: The Neurorobotics Platform

    Combined efforts in the fields of neuroscience, computer science, and biology have made it possible to design biologically realistic models of the brain based on spiking neural networks. For a proper validation of these models, an embodiment in a dynamic and rich sensory environment, where the model is exposed to a realistic sensory-motor task, is needed. Because these brain models are, at the current stage, too complex to meet real-time constraints, they cannot be embedded in a real-world task; the embodiment has to be simulated as well. While adequate tools exist to simulate either complex neural networks or robots and their environments, there has so far been no tool that makes it easy to establish communication between brain and body models. The Neurorobotics Platform is a new web-based environment that aims to fill this gap by offering scientists and technology developers a software infrastructure that allows them to connect brain models to detailed simulations of robot bodies and environments and to use the resulting neurorobotic systems for in silico experimentation. To simplify the workflow and reduce the required programming skills, the platform provides editors for the specification of experimental sequences and conditions, environments, robots, and brain–body connectors. In addition, a variety of existing robots and environments are provided. This work presents the architecture of the first release of the Neurorobotics Platform, developed in subproject 10 "Neurorobotics" of the Human Brain Project (HBP). In its current state, the Neurorobotics Platform allows researchers to design and run basic experiments in neurorobotics using simulated robots and simulated environments linked to simplified versions of brain models. We illustrate the capabilities of the platform with three example experiments: a Braitenberg task implemented on a mobile robot, a sensory-motor learning task based on a robotic controller, and a visual tracking task embedding a retina model on the iCub humanoid robot. These use cases allow us to assess the applicability of the Neurorobotics Platform to robotic tasks as well as to neuroscientific experiments.

    The research leading to these results has received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement no. 604102 (Human Brain Project) and from the European Union's Horizon 2020 Research and Innovation Programme under Grant Agreement No. 720270 (HBP SGA1).
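
    At its core, the coupling the platform automates is a lockstep loop between a brain simulator and a physics simulator. Below is a minimal sketch, assuming stub objects for both simulators and a pair of transfer functions; none of these names come from the platform's API.

        def run_closed_loop(brain, robot, tf_in, tf_out, dt=0.02, t_end=10.0):
            t = 0.0
            while t < t_end:
                sensors = robot.read_sensors()          # e.g. camera image, joint states
                brain.set_input(tf_in(sensors))         # sensor data -> spikes/currents
                brain.step(dt)                          # advance the neural simulation
                commands = tf_out(brain.read_output())  # spikes -> motor commands
                robot.apply_commands(commands)
                robot.step(dt)                          # advance the physics simulation
                t += dt

    Keeping both simulations synchronized at a fixed timestep is what lets brain models that cannot run in real time still drive an embodied task.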

    Soft-Grasping With an Anthropomorphic Robotic Hand Using Spiking Neurons

    Evolution gave humans advanced grasping capabilities, combining an adaptive hand with efficient control. Grasping motions can quickly be adapted if the object moves or deforms. Soft-grasping with an anthropomorphic hand is a valuable capability for robots interacting with objects shaped for humans. Nevertheless, most robotic applications use vacuum, two-finger, or custom-made grippers. We present a biologically inspired spiking neural network (SNN) for soft-grasping to control a robotic hand. Two control loops are combined: one based on motor primitives and one based on a compliant controller activated by a reflex. The finger primitives represent synergies between joints, and the hand primitives represent different affordances. Contact is detected with a mechanism modeled on interneuron circuits in the spinal cord, which triggers reflexes. A Schunk SVH 5-finger hand was used to grasp objects of different shapes, stiffnesses, and sizes. The SNN adapted the grasping motions without knowing the exact properties of the objects. The compliant controller with online learning proved sensitive enough to allow even the grasping of balloons. In contrast to deep learning approaches, our SNN requires only one example of each grasping motion to train the primitives. Neither inverse kinematics computation nor complex contact-point planning is required. This approach simplifies the control and can be used on different robots, providing adaptive features similar to those of a human hand. A physical imitation of a biological system, implemented completely with an SNN and a robotic hand, can provide new insights into grasping mechanisms.
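
    The two-loop structure can be caricatured per joint in plain code. This is a conceptual sketch, not the paper's SNN: the reflex is reduced to a torque threshold and the compliant controller to proportional back-off, with all constants as placeholders.

        def finger_step(q, q_primitive, torque, state, gain=0.1, thresh=0.2):
            # Reflex: a torque above threshold signals contact, standing in
            # for the spinal-interneuron mechanism described above.
            if not state["contact"] and torque > thresh:
                state["contact"] = True
                state["q_hold"] = q
            if state["contact"]:
                # Compliant loop: yield in proportion to excess torque, so
                # soft objects (e.g. balloons) are not crushed.
                state["q_hold"] -= gain * max(torque - thresh, 0.0)
                return state["q_hold"]
            # Primitive loop: track the pre-trained grasping motion.
            return q + gain * (q_primitive - q)

    Each joint starts with state = {"contact": False, "q_hold": 0.0}; the switch from primitive-following to compliant holding is what lets one trained motion generalize across object shapes and stiffnesses.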

    ROS engineering workbench based on semantically enriched app models for improved reusability

    In this work, the ReApp Engineering Workbench and its underlying semantically enriched app models are presented. Using a model that describes an app's functionality, interfaces, and other attributes allows engineering tools to be applied for code generation and automated testing. Further, it ensures the compatibility of the generated interfaces, which in turn enhances the reusability of the developed apps in larger applications.
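
    As a rough illustration of the model-driven idea (the field names and generator below are invented for this sketch, not ReApp's actual schema), an app model can pin down a node's interfaces so that a matching ROS skeleton is generated rather than hand-written.

        app_model = {
            "name": "gripper_driver",
            "inputs":  [("/gripper/command", "std_msgs/String")],
            "outputs": [("/gripper/state", "sensor_msgs/JointState")],
        }

        def generate_node(model):
            # Emit a rospy node skeleton whose topics and message types
            # match the model, keeping composed apps interface-compatible.
            lines = ["import rospy", "",
                     "rospy.init_node('%s')" % model["name"]]
            for topic, msg in model["outputs"]:
                lines.append("pub = rospy.Publisher('%s', %s, queue_size=1)"
                             % (topic, msg.split('/')[-1]))
            for topic, msg in model["inputs"]:
                lines.append("sub = rospy.Subscriber('%s', %s, callback)"
                             % (topic, msg.split('/')[-1]))
            return "\n".join(lines)

    Because code generation and automated tests read the same model, an app that passes validation can be composed into a larger application without manual interface reconciliation.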

    Embodied Neuromorphic Vision with Continuous Random Backpropagation

    Spike-based communication between biological neurons is sparse and unreliable. This enables the brain to process visual information from the eyes efficiently. Taking inspiration from biology, artificial spiking neural networks coupled with silicon retinas attempt to model these computations. Recent findings in machine learning have allowed the derivation of a family of powerful synaptic plasticity rules approximating backpropagation for spiking networks. Are these rules capable of processing real-world visual sensory data? In this paper, we evaluate the performance of Event-Driven Random Back-Propagation (eRBP) at learning representations from event streams provided by a Dynamic Vision Sensor (DVS). First, we show that eRBP matches state-of-the-art performance on the DvsGesture dataset with the addition of a simple covert attention mechanism. By remapping visual receptive fields relative to the center of motion, this attention mechanism provides translation invariance at low computational cost compared to convolutions. Second, we successfully integrate eRBP in a real robotic setup, where a robotic arm grasps objects according to detected visual affordances. In this setup, visual information is actively sensed by a DVS mounted on a robotic head performing microsaccadic eye movements. We show that our method classifies affordances within 100 ms after microsaccade onset, which is comparable to human performance reported in behavioral studies. Our results suggest that advances in neuromorphic technology and plasticity rules enable the development of autonomous robots operating at high speed and low energy consumption.
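
    The covert attention mechanism admits a compact sketch, assuming events arrive as an (N, 2) array of (x, y) pixel coordinates; the window and grid sizes are illustrative, not the paper's values.

        import numpy as np

        def attend(events, window=5000, grid=32):
            # Estimate the center of motion from the most recent events.
            recent = events[-window:]
            center = np.median(recent, axis=0)
            # Remap receptive fields relative to that center, so the
            # downstream network sees a translation-invariant patch.
            shifted = recent - center + grid // 2
            inside = ((shifted >= 0) & (shifted < grid)).all(axis=1)
            return shifted[inside].astype(int)

    Compared to convolutions, which buy translation invariance by sharing weights across every location, this shift is computed once per update and costs only a median and a subtraction.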